Learn to Compress and Restore Sequential Data
Authors
Abstract
Data compression methods can be classified into two groups: lossless and lossy. Usually the latter achieves a higher compression ratio than the former. However, to develop a lossy compression method, we have to know, for a given type of data, what information can be discarded without significant degradation of the data quality. A usual way to obtain such knowledge is by experiments. For example, from user statistics, we know that human eyes are insensitive to some frequency channels of the light signal. Thus we can compress image data by decomposing them into various frequency channels using a DCT transformation and neglecting the coefficients of the channels to which human eyes are insensitive. However, it is complex and expensive for human analysts to conduct and study so many experiments. Alternatively, we propose to learn this knowledge automatically by using machine learning techniques. Under the framework of Bayesian learning, general prior knowledge is expressed by designing the statistical models, and the refined posterior knowledge can be learned automatically from the data to be compressed. More specifically, we consider the compression of some input data as learning a statistical model from the data, and consider the restoration of the data as sampling from the learned model. Therefore, only the estimated model parameters are saved as the compressed version. A key to this idea is to design a statistical model that can accurately describe the data (so that it is possible to recover the data precisely) and is defined by a compact set of parameters (so as to achieve a high compression ratio). For a general application of compressing sequential data, we designed the Variable-Length Hidden Markov Model (VLHMM), whose learning algorithm automatically learns a minimal set of parameters (by optimizing a Minimum-Entropy criterion) that accurately models the sequential data (by optimizing a Maximum-Likelihood criterion). The self-adaptation ability of the learning algorithm makes VLHMM...
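To make the central idea concrete, here is a minimal sketch of "compress = learn a compact model, restore = sample from it", using a plain first-order Markov chain over discrete symbols rather than the paper's VLHMM. The function names, the smoothing constant, and the toy data are illustrative assumptions and do not reproduce the paper's learning algorithm.

# Sketch of the "compress = learn a model, restore = sample from it" idea,
# with a first-order Markov chain standing in for the VLHMM. Only the
# estimated parameters are kept as the "compressed" representation, so the
# scheme is lossy: statistics are preserved, exact symbols are not.
import numpy as np

def compress(sequence, num_symbols):
    """Estimate initial-state and transition probabilities from a symbol sequence."""
    init = np.zeros(num_symbols)
    trans = np.zeros((num_symbols, num_symbols))
    init[sequence[0]] = 1.0
    for a, b in zip(sequence[:-1], sequence[1:]):
        trans[a, b] += 1.0
    # Light smoothing so every transition remains possible when sampling.
    trans += 1e-3
    trans /= trans.sum(axis=1, keepdims=True)
    return init, trans  # this compact parameter set is what gets stored

def restore(params, length, rng=None):
    """Sample a sequence of the requested length from the learned model."""
    init, trans = params
    if rng is None:
        rng = np.random.default_rng(0)
    state = int(np.argmax(init))
    out = [state]
    for _ in range(length - 1):
        state = int(rng.choice(len(init), p=trans[state]))
        out.append(state)
    return out

# Toy usage: a periodic sequence is described by very few parameters.
data = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]
params = compress(data, num_symbols=3)
print(restore(params, length=12))

In this toy case the restored sequence matches the input's statistics exactly because the data are strictly periodic; for richer data, the quality of restoration depends on how accurately the chosen model family can describe the source, which is the design requirement the abstract emphasizes.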
Similar resources
First-order Term Compression: Techniques and Applications
Lossless sequential data compression techniques are well-known. Such techniques work best on data with repetition in its sequential representation. One might wish to compress data that can possess non-sequential repetitive structure, for example, first-order terms. Sequential techniques often do not compress such data well. This paper presents improved compression techniques that can take advan...
Title: Comparing the effect of warm moist compress and Calendula ointment on the severity of phlebitis caused by 50% dextrose infusion: A clinical trial
Abstract: Background: One of the important hypertonic solutions is 50% dextrose. Phlebitis is the most common complication of this solution, and its management is quite necessary. Regarding this, the present study aimed to compare the effect of warm moist compress and Calendula ointment on the severity of phlebitis caused by 50% dextrose infusion. Methods: This clinical trial was conducted on...
A Method for Image Preprocessing to Improve JPEG Performance
A great deal of research has been performed on image compression, and different methods have been proposed. Each of the existing methods achieves different compression rates on various images. By identifying the effective parameters in a compression algorithm and strengthening them in a preprocessing stage, the compression rate of the algorithm can be improved. JPEG is one of the successful compression...
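As a generic, hedged illustration of this preprocessing idea (not the specific method proposed in that paper), a mild low-pass filter applied before JPEG encoding usually reduces the encoded file size, since JPEG spends most of its bits on high-frequency detail. The file names, blur radius, and quality setting below are arbitrary assumptions, and the snippet requires the Pillow library.

# Generic illustration: mild Gaussian blur as a preprocessing step before JPEG
# encoding. Attenuating high-frequency content typically shrinks the encoded
# file at the cost of some sharpness; radius and quality are arbitrary here.
from PIL import Image, ImageFilter
import os

img = Image.open("input.png").convert("RGB")   # hypothetical input file
img.save("baseline.jpg", quality=85)

smoothed = img.filter(ImageFilter.GaussianBlur(radius=0.8))
smoothed.save("preprocessed.jpg", quality=85)

print(os.path.getsize("baseline.jpg"), os.path.getsize("preprocessed.jpg"))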
Speech Emotion Recognition Using Scalogram Based Deep Structure
Speech Emotion Recognition (SER) is an important part of speech-based Human-Computer Interface (HCI) applications. Previous SER methods rely on the extraction of features and training an appropriate classifier. However, most of those features can be affected by emotionally irrelevant factors such as gender, speaking styles and environment. Here, an SER method has been proposed based on a concat...
Convergence in a sequential two stages decision making process
We analyze a sequential decision making process, in which at each step the decision is made in two stages. In the first stage a partially optimal action is chosen, which allows the decision maker to learn how to improve it under the new environment. We show how inertia (cost of changing) may lead the process to converge to a routine where no further changes are made. We illustrate our scheme with some...